Robust state estimation methods for robotics applications
State estimation is an integral component of any autonomous robotic system. Estimating the correct position, velocity, and orientation of an agent in its environment enables downstream tasks such as mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by fusing data from multiple sensors in a probabilistic framework. These sensors include Inertial Measurement Units (IMUs), cameras, lidars (range data), and Global Navigation Satellite System (GNSS) receivers. The main challenge in sensor-based state estimation is the presence of noisy or erroneous data, or even the absence of informative data. Common examples of such situations include wrong feature matching between images or point clouds, false loop closures due to perceptual aliasing (different places that look similar can confuse the robot), the presence of dynamic objects in the environment (odometry algorithms assume a static environment), and multipath errors for GNSS (satellite signals bouncing off tall structures such as buildings before reaching the receiver). This work studies existing and new ways in which standard estimation algorithms like the Kalman filter and factor graphs can be made robust to such adverse conditions without losing performance in ideal, outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters for wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of non-linear least squares and estimating the inlier noise threshold is proposed and tested with point cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
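The robust Kalman filtering idea mentioned above can be illustrated with a minimal sketch: a standard linear measurement update whose innovation is re-weighted by a Huber kernel so that outlier measurements (e.g. wheel odometry during slip) are down-weighted instead of trusted fully. The matrices, the threshold value, and the noise-inflation view of the weight are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def huber_weight(r, delta=1.345):
    """IRLS-style Huber weight: 1 inside the threshold, delta/|r| outside."""
    r = abs(r)
    return 1.0 if r <= delta else delta / r

def robust_kf_update(x, P, z, H, R):
    """One Kalman measurement update; suspected outliers shrink the gain."""
    innovation = z - H @ x                                  # (m,)
    S = H @ P @ H.T + R                                     # innovation covariance
    nres = float(np.sqrt(innovation @ np.linalg.inv(S) @ innovation))
    w = huber_weight(nres)                                  # weight from whitened residual
    R_eff = R / max(w, 1e-9)                                # inflate noise for outliers
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_eff)
    x_new = x + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: 2D state (position, velocity), position-only measurement.
x = np.array([0.0, 1.0])
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
x_in, _ = robust_kf_update(x, P, np.array([0.5]), H, R)    # inlier: standard update
x_out, _ = robust_kf_update(x, P, np.array([10.0]), H, R)  # outlier: down-weighted
```

For inliers the whitened residual stays below the threshold, the weight is 1, and the update reduces to the ordinary Kalman update; only large residuals trigger the noise inflation.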
Scale-Variant Robust Kernel Optimization for Non-linear Least Squares Problems
In this article, we consider the benefit of increasing adaptivity of an
existing robust estimation algorithm by learning two parameters to better fit
the residual distribution. Our method uses these two parameters to calculate
weights for Iterative Re-weighted Least Squares (IRLS). This adaptive nature of
the weights can be helpful in situations where the noise level varies in the
measurements. We test our algorithm first on the point cloud registration
problem with synthetic data sets and lidar odometry with open-source real-world
data sets. We show that the existing approach requires additional manual tuning
of a residual scale parameter, which our method instead learns directly from
data while achieving similar or better performance.
Comment: Submitted to IEEE Transactions on Aerospace and Electronic Systems. Correction made to Fig.
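The core mechanism described above, computing IRLS weights from a two-parameter robust kernel (a shape and a scale), can be sketched on a toy robust-estimation problem. The weight function below follows the general adaptive loss family (shape alpha, scale c); the fixed parameter values and the robust-mean problem are illustrative assumptions, not the article's actual formulation, which learns both parameters from the residual distribution.

```python
import numpy as np

def kernel_weight(r, alpha=-2.0, c=1.0):
    """IRLS weight w(r) = rho'(r)/r for a general two-parameter robust loss
    (valid for alpha not in {0, 2}); alpha sets the shape, c the scale."""
    return (1.0 / c**2) * ((r / c) ** 2 / abs(alpha - 2.0) + 1.0) ** (alpha / 2.0 - 1.0)

def irls_mean(y, alpha=-2.0, c=1.0, iters=50):
    """Robustly estimate a location parameter by iteratively re-weighted LS."""
    mu = np.median(y)                        # robust initialization
    for _ in range(iters):
        w = kernel_weight(y - mu, alpha, c)  # down-weight large residuals
        mu = np.sum(w * y) / np.sum(w)       # weighted least-squares step
    return mu

rng = np.random.default_rng(0)
inliers = rng.normal(5.0, 0.2, size=200)
outliers = rng.normal(50.0, 1.0, size=40)
y = np.concatenate([inliers, outliers])
mu_robust = irls_mean(y)   # lands near 5; the plain mean is pulled to ~12.5
```

In point cloud registration or lidar odometry, the same weighting would be applied per-correspondence inside each Gauss-Newton iteration; fixing alpha and c by hand is exactly the manual tuning the article's method avoids.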
Interval Bound Propagation–aided Few-shot Learning
Few-shot learning aims to transfer the knowledge acquired from training on a
diverse set of tasks, from a given task distribution, to generalize to unseen
tasks, from the same distribution, with a limited amount of labeled data. The
underlying requirement for effective few-shot generalization is to learn a good
representation of the task manifold. One way to encourage this is to preserve
local neighborhoods in the feature space learned by the few-shot learner. To
this end, we introduce the notion of interval bounds from the provably robust
training literature to few-shot learning. The interval bounds are used to
characterize neighborhoods around the training tasks. These neighborhoods can
then be preserved by minimizing the distance between a task and its respective
bounds. We further introduce a novel strategy to artificially form new tasks
for training by interpolating between the available tasks and their respective
interval bounds, to aid in cases with a scarcity of tasks. We apply our
framework to both model-agnostic meta-learning as well as prototype-based
metric-learning paradigms. The efficacy of our proposed approach is evident
from the improved performance on several datasets from diverse domains in
comparison to a sizable number of recent competitors.
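The interval bounds borrowed from the provably robust training literature can be sketched as follows: an input box [l, u] is pushed through an affine layer by splitting the weights into positive and negative parts, and through ReLU elementwise. The layer sizes, weights, and epsilon below are illustrative assumptions, not the paper's architecture or training setup.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds through y = W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lower = W_pos @ l + W_neg @ u + b   # choose the bound minimizing each output
    upper = W_pos @ u + W_neg @ l + b   # choose the bound maximizing each output
    return lower, upper

def ibp_relu(l, u):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# An embedding with an epsilon-box around it, characterizing its neighborhood.
x = np.array([0.5, -0.2])
eps = 0.1
l0, u0 = x - eps, x + eps
W = np.array([[1.0, -2.0], [0.5, 1.5]])
b = np.array([0.0, -0.1])
l1, u1 = ibp_relu(*ibp_affine(l0, u0, W, b))
```

The resulting box is sound: every input inside the epsilon-box maps to an output inside [l1, u1]. In the paper's setting, minimizing the distance between a task embedding and its bounds (and interpolating between them to synthesize tasks) uses exactly these per-layer boxes.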